Augmented Lagrangian
Fast Equivariant Imaging: Acceleration for Unsupervised Learning via Augmented Lagrangian and Auxiliary PnP Denoisers
Xu, Guixian, Li, Jinglai, Tang, Junqi
In this work, we propose Fast Equivariant Imaging (FEI), a novel unsupervised learning framework for rapidly and efficiently training deep imaging networks without ground-truth data. By reformulating the Equivariant Imaging (EI) optimization problem via the method of Lagrange multipliers and leveraging plug-and-play (PnP) denoisers, this unsupervised scheme achieves superior efficiency and performance compared to the vanilla EI paradigm. In particular, our FEI schemes achieve an order-of-magnitude (10x) acceleration over standard EI when training a U-Net for X-ray CT reconstruction and image inpainting, with improved generalization performance.
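To make the splitting idea concrete, here is a toy augmented-Lagrangian (ADMM-style) scheme with a plug-and-play prior step. This is an illustrative sketch, not the paper's FEI algorithm: the `soft_threshold` stand-in denoiser, the linear model `y ≈ A x`, and all parameter values are our own assumptions.

```python
import numpy as np

def soft_threshold(v, tau):
    # Stand-in "denoiser": soft-thresholding. A real PnP scheme would
    # plug a learned denoiser in here; this is only an illustration.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def pnp_admm(y, A, rho=1.0, tau=0.1, iters=100):
    """ADMM-style augmented-Lagrangian splitting for y = A x,
    with the prior handled by a (plug-and-play) denoiser."""
    m, n = A.shape
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)  # u: scaled dual variable
    lhs = A.T @ A + rho * np.eye(n)
    Aty = A.T @ y
    for _ in range(iters):
        x = np.linalg.solve(lhs, Aty + rho * (z - u))  # data-consistency step
        z = soft_threshold(x + u, tau / rho)           # denoiser (prior) step
        u += x - z                                     # dual ascent on z = x
    return z
```

The dual update enforces the splitting constraint z = x, so the data-consistency and denoiser sub-steps can each be solved in the form that suits them best.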
Scalable Mixed-Integer Optimization with Neural Constraints via Dual Decomposition
Zeng, Shuli, Zhang, Sijia, Wu, Feng, Tang, Shaojie, Li, Xiang-Yang
Embedding deep neural networks (NNs) into mixed-integer programs (MIPs) is attractive for decision making with learned constraints, yet state-of-the-art "monolithic" linearisations blow up in size and quickly become intractable. In this paper, we introduce a novel dual-decomposition framework that relaxes the single coupling equality u = x with an augmented Lagrange multiplier and splits the problem into a vanilla MIP and a constrained NN block. Each part is tackled by the solver that suits it best -- branch & cut for the MIP subproblem, first-order optimisation for the NN subproblem -- so the model remains modular, the number of integer variables never grows with network depth, and the per-iteration cost scales only linearly with the NN size. On a standard MIP benchmark library, our method proves scalable, modular, and adaptable: it runs 120x faster than an exact Big-M formulation on the largest test case; the NN sub-solver can be swapped from a log-barrier interior step to a projected-gradient routine with no code changes and identical objective value; and swapping the MLP for an LSTM backbone still completes the full optimisation in 47s without any bespoke adaptation. Intelligent decision systems increasingly integrate neural networks into decision-making and optimization pipelines [1-3].
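A minimal numerical sketch of this splitting (illustrative only: the "learned" constraint is a hand-written linear inequality, the "MIP" block is per-coordinate enumeration rather than branch & cut, and all names and parameter values are our own assumptions):

```python
import numpy as np

def dual_decomposition(c, b, K=3, rho=1.0, mu=50.0, outer=30, inner=200, lr=0.01):
    """Toy version of the splitting: minimise c.x over integer x in {0..K}^n,
    subject to a 'learned' constraint sum(u) >= b on a continuous copy u,
    with the coupling u = x relaxed by an augmented Lagrangian."""
    n = len(c)
    x = np.zeros(n)
    u = np.zeros(n)
    lam = np.zeros(n)            # multiplier for the coupling u = x
    grid = np.arange(K + 1.0)
    for _ in range(outer):
        # "MIP" block: the augmented Lagrangian is separable in x,
        # so each coordinate is minimised by enumeration over the grid.
        for i in range(n):
            vals = c[i] * grid - lam[i] * grid + 0.5 * rho * (u[i] - grid) ** 2
            x[i] = grid[np.argmin(vals)]
        # "NN" block: projected gradient on penalised constraint + coupling.
        for _ in range(inner):
            viol = max(b - u.sum(), 0.0)           # constraint violation
            grad = -2.0 * mu * viol + lam + rho * (u - x)
            u = np.clip(u - lr * grad, 0.0, K)
        lam += rho * (u - x)     # dual ascent on the coupling
    return x, u
```

Because only the coupling u = x is dualised, the integer block never sees the constraint network, which is the structural reason the integer-variable count stays independent of network depth.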
AL-CoLe: Augmented Lagrangian for Constrained Learning
Boero, Ignacio, Hounie, Ignacio, Ribeiro, Alejandro
Despite the non-convexity of most modern machine learning parameterizations, Lagrangian duality has become a popular tool for addressing constrained learning problems. We revisit Augmented Lagrangian methods, which aim to mitigate the duality gap in non-convex settings while requiring only minimal modifications, yet have remained comparatively unexplored in constrained learning settings. We establish strong duality results under mild conditions, prove convergence of dual ascent algorithms to feasible and optimal primal solutions, and provide PAC-style generalization guarantees. Finally, we demonstrate their effectiveness on fairness-constrained classification tasks.
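The dual-ascent scheme the abstract refers to can be sketched on a toy convex problem (our own stand-in, not the paper's experiments): minimise a squared distance subject to a linear inequality, alternating primal descent on the augmented Lagrangian with a multiplier update.

```python
import numpy as np

def augmented_lagrangian(a, b, rho=1.0, outer=25, inner=200, lr=0.1):
    """Minimise f(theta) = ||theta - a||^2 s.t. g(theta) = sum(theta) - b <= 0,
    via the augmented Lagrangian  f + (rho/2) * max(0, lam/rho + g)^2
    and dual ascent on lam. A toy stand-in for a constrained learning task."""
    theta = np.zeros_like(a)
    lam = 0.0
    for _ in range(outer):
        for _ in range(inner):   # primal: gradient descent on the AL
            g = theta.sum() - b
            grad = 2.0 * (theta - a) + rho * max(0.0, lam / rho + g)
            theta = theta - lr * grad
        lam = max(0.0, lam + rho * (theta.sum() - b))   # dual ascent step
    return theta, lam
```

On this convex toy the iterates converge to the projection of `a` onto the feasible halfspace; the quadratic term is what lets the multiplier converge without the infinite penalties a pure penalty method would need.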
Export Reviews, Discussions, Author Feedback and Meta-Reviews
The paper analyzes ADMM with the quadratic term in the augmented Lagrangian replaced by a Bregman divergence. The authors prove convergence-rate results in which the L2 distance of the standard ADMM guarantee is replaced by the corresponding Bregman divergence. As with mirror descent, this can improve the dimension-dependent constants of the convergence bounds. The authors then demonstrate the utility of their algorithm on synthetic instances of a specific optimization problem (the mass transportation problem).
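For orientation, the substitution under review can be written schematically as follows (our notation; the exact placement of the divergence varies across Bregman-ADMM formulations):

```latex
% Standard augmented Lagrangian for  min f(x) + g(z)  s.t.  Ax + Bz = c:
\mathcal{L}_\rho(x,z,\lambda) = f(x) + g(z)
  + \langle \lambda,\, Ax + Bz - c\rangle
  + \tfrac{\rho}{2}\,\|Ax + Bz - c\|_2^2
% Bregman divergence generated by a strictly convex \phi:
B_\phi(u, v) = \phi(u) - \phi(v) - \langle \nabla\phi(v),\, u - v \rangle
% The Bregman variant replaces the quadratic penalty, e.g.
\tfrac{\rho}{2}\,\|Ax + Bz - c\|_2^2 \;\longrightarrow\; \rho\, B_\phi(Ax,\, c - Bz)
% Choosing \phi(u) = \tfrac{1}{2}\|u\|_2^2 gives B_\phi(u,v) = \tfrac{1}{2}\|u-v\|_2^2
% and recovers standard ADMM; an entropy \phi adapted to the simplex yields
% mirror-descent-style geometry, which is what drives the improved
% dimension-dependent constants on mass-transportation instances.
```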
ecg2o: A Seamless Extension of g2o for Equality-Constrained Factor Graph Optimization
Abdelkarim, Anas, Voos, Holger, Görges, Daniel
Factor graph optimization serves as a fundamental framework for robotic perception, enabling applications such as pose estimation, simultaneous localization and mapping (SLAM), structure-from-motion (SfM), and situational awareness. Traditionally, these methods solve unconstrained least squares problems using algorithms such as Gauss-Newton and Levenberg-Marquardt. However, extending factor graphs with native support for equality constraints can improve solution accuracy and broaden their applicability, particularly in optimal control. In this paper, we propose a novel extension of factor graphs that seamlessly incorporates equality constraints without requiring additional optimization algorithms. Our approach maintains the efficiency and flexibility of existing second-order optimization techniques while ensuring constraint feasibility. To validate our method, we apply it to an optimal control problem for velocity tracking in autonomous vehicles and benchmark our results against state-of-the-art constraint handling techniques. Additionally, we introduce ecg2o, a header-only C++ library that extends the widely used g2o factor graph library by adding full support for equality-constrained optimization. This library, along with demonstrative examples and the optimal control problem, is available as open source at https://github.com/snt-arg/ecg2o
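One standard way to fold equality constraints into a Gauss-Newton iteration, shown here as our own sketch rather than necessarily ecg2o's exact scheme, is to solve the KKT system of the linearized subproblem at each step:

```latex
% Problem:  min_x \tfrac{1}{2}\|r(x)\|^2   subject to   h(x) = 0
% Linearize at the current iterate, with J = \partial r/\partial x and
% A = \partial h/\partial x, then solve the KKT system
\begin{pmatrix} J^\top J & A^\top \\ A & 0 \end{pmatrix}
\begin{pmatrix} \Delta x \\ \lambda \end{pmatrix}
=
\begin{pmatrix} -J^\top r \\ -h \end{pmatrix}
% and update x \leftarrow x + \Delta x.  The multipliers \lambda fall out of
% the same linear solve, so no optimization machinery beyond the existing
% second-order (Gauss-Newton / Levenberg-Marquardt) solver is required.
```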